6 research outputs found

    Sustainable scheduling policies for radio access networks based on LTE technology

    Get PDF
    A thesis submitted to the University of Bedfordshire in partial fulfilment of the requirements for the degree of Doctor of Philosophy.
    In LTE access networks, Radio Resource Management (RRM) is one of the most important modules and is responsible for the overall management of radio resources. The packet scheduler is the sub-module that assigns the available radio resources to each user in order to deliver the requested services in the most efficient manner. Data packets are scheduled dynamically at every Transmission Time Interval (TTI), a time window used to take the users' requests and to respond to them accordingly. The scheduling procedure is conducted by using scheduling rules which select different users to be scheduled at each TTI based on priority metrics. Various scheduling rules exist, and they behave differently by balancing the scheduler performance in the direction imposed by one of the following objectives: increasing the system throughput, maintaining user fairness, and respecting the Guaranteed Bit Rate (GBR), Head of Line (HoL) packet delay, packet loss rate and queue stability requirements. Most static scheduling rules follow sequential multi-objective optimization, in the sense that once the first targeted objective is satisfied, other objectives can be prioritized. When the targeted scheduling objective(s) can be satisfied at each TTI, the LTE scheduler is considered optimal or feasible. The scheduling performance therefore depends on the exploited rule and the particular objectives it focuses on. This study aims to increase the percentage of feasible TTIs for a given downlink transmission by applying a mixture of scheduling rules instead of a single discipline adopted across the entire scheduling session. Two types of optimization problems are proposed in this sense: Dynamic Scheduling Rule based Sequential Multi-Objective Optimization (DSR-SMOO), when the applied scheduling rules address the same objective, and Dynamic Scheduling Rule based Concurrent Multi-Objective Optimization (DSR-CMOO), when the pool of rules addresses different scheduling objectives. The best way of solving such complex optimization problems is to adapt and refine scheduling policies that are able to call different rules at each TTI based on the best-matching scheduler conditions (states). The idea is to develop a set of non-linear functions which map the scheduler state at each TTI to optimal probability distributions for selecting the best scheduling rule. Due to the multi-dimensional and continuous characteristics of the scheduler state space, the scheduling functions have to be approximated. Moreover, the function approximations are learned through interaction with the RRM environment. Reinforcement Learning (RL) algorithms are used in this sense to evaluate and refine the scheduling policies for the considered DSR-SMOO/CMOO optimization problems. Neural networks are used to train the non-linear mapping functions based on the interaction among the intelligent controller, the LTE packet scheduler and the RRM environment. In order to enhance convergence to the feasible state and to reduce the dimension of the scheduler state space, meta-heuristic approaches are used for channel state aggregation. Simulation results show that the proposed aggregation scheme outperforms other heuristic methods.
    When the channel state aggregation scheme is exploited, the proposed DSR-SMOO/CMOO problems focusing on different objectives and solved by using various RL approaches are able to: increase the mean percentage of feasible TTIs, minimize the number of TTIs in which the RL approaches punish the actions taken TTI-by-TTI, and minimize the variation of the performance indicators when different simulations are launched in parallel. In this way, the obtained scheduling policies focused on the multi-objective criteria are sustainable.
    Keywords: LTE, packet scheduling, scheduling rules, multi-objective optimization, reinforcement learning, channel, aggregation, scheduling policies, sustainable
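    The per-TTI rule-selection idea can be illustrated with a minimal Python sketch, assuming a hypothetical pool of scheduling rules and a small feed-forward mapping from a (normalised) scheduler state to a probability distribution over those rules; the rule names, state features and dimensions are illustrative, not the thesis' exact design.

        import numpy as np

        # Hypothetical pool of candidate scheduling rules (names are illustrative).
        RULES = ["proportional_fair", "max_throughput", "exp_rule", "m_lwdf"]

        rng = np.random.default_rng(0)
        # Small feed-forward mapping: 6 state features -> 16 hidden units -> |RULES| logits.
        W1, b1 = rng.normal(0.0, 0.1, (16, 6)), np.zeros(16)
        W2, b2 = rng.normal(0.0, 0.1, (len(RULES), 16)), np.zeros(len(RULES))

        def rule_probabilities(state):
            """Map a continuous scheduler state to rule-selection probabilities (softmax)."""
            h = np.tanh(W1 @ state + b1)
            logits = W2 @ h + b2
            e = np.exp(logits - logits.max())
            return e / e.sum()

        def select_rule(state):
            """Sample the scheduling rule applied for the current TTI."""
            return rng.choice(RULES, p=rule_probabilities(state))

        # One TTI: the state could hold normalised throughput, fairness index, HoL delay, etc.
        state = np.array([0.7, 0.85, 0.2, 0.4, 0.1, 0.9])
        print(select_rule(state), rule_probabilities(state).round(3))

    In the thesis, the weights of such a mapping would be refined by RL through interaction with the RRM environment rather than left at random initial values as in this sketch.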

    A comparison of reinforcement learning algorithms in fairness-oriented OFDMA schedulers

    Get PDF
    Due to large-scale control problems in 5G access networks, the complexity of radio resource management is expected to increase significantly. Reinforcement learning is seen as a promising solution that can enable intelligent decision-making and reduce the complexity of different optimization problems for radio resource management. The packet scheduler is an important entity of radio resource management that allocates users' data packets in the frequency domain according to the implemented scheduling rule. In this context, by making use of reinforcement learning, we could actually determine, in each state, the most suitable scheduling rule to be employed that could improve the quality of service provisioning. In this paper, we propose a reinforcement learning-based framework to solve scheduling problems with the main focus on meeting the user fairness requirements. This framework makes use of feed-forward neural networks to map momentary states to proper parameterization decisions for the proportional fair scheduler. The simulation results show that our reinforcement learning framework outperforms the conventional adaptive schedulers oriented on the fairness objective. Discussions are also raised to determine the best reinforcement learning algorithm to be implemented in the proposed framework based on various scheduler settings.
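    As a rough illustration of what a "parameterization decision for the proportional fair scheduler" could look like, the sketch below uses the common generalised PF priority r_k^alpha / R_k^beta and lets the exponents play the role of the per-state decision; the metric form, user counts and numeric values are assumptions, not the paper's exact formulation.

        import numpy as np

        def pf_priority(instantaneous_rate, average_rate, alpha=1.0, beta=1.0):
            """Generalised PF metric: r_k^alpha / R_k^beta (alpha = beta = 1 is classic PF)."""
            return (instantaneous_rate ** alpha) / (average_rate ** beta + 1e-9)

        def schedule_rb(rates, avg_rates, alpha, beta):
            """Assign one resource block to the user with the highest priority."""
            priorities = pf_priority(np.asarray(rates), np.asarray(avg_rates), alpha, beta)
            return int(np.argmax(priorities))

        # Example: 4 users; a larger beta pushes the allocation towards fairness.
        rates = [5.0, 2.0, 8.0, 1.0]        # achievable rates on this resource block
        avg_rates = [4.0, 0.5, 6.0, 0.3]    # exponentially averaged past throughput
        print(schedule_rb(rates, avg_rates, alpha=1.0, beta=1.0))   # classic PF
        print(schedule_rb(rates, avg_rates, alpha=1.0, beta=2.0))   # more fairness-oriented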

    Self-Learning-Based Data Aggregation Scheduling Policy in Wireless Sensor Networks

    No full text
    Reducing the transmission delay and maximizing the sensor lifetime are perennial research topics in the domain of wireless sensor networks (WSNs). Setting aside the influence of the routing protocol on the transmission direction of data packets, the MAC protocol, which controls the timing of transmission and reception, is also an important factor in communication performance. Many existing works attempt to address these problems by using time-slot scheduling policies. However, most of them exploit global network knowledge to construct a stationary schedule, which conflicts with the dynamic and scalable nature of WSNs. In order to realize distributed computation and self-learning, we propose to integrate Q-learning into the exploration process of an efficient adaptive slot schedule. Owing to its convergence properties, the schedule quickly approaches an approximately optimal sequence as frames are executed. The feasibility and high efficiency of the proposed method are validated through the corresponding simulations.
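    A minimal, hypothetical sketch of the self-learning idea: each node runs a bandit-style Q-learning update over the slots of a frame, rewarding collision-free transmissions, so a distributed schedule emerges frame by frame without global network knowledge; the reward shaping and constants are illustrative only.

        import random

        NUM_SLOTS = 8
        ALPHA, EPSILON = 0.1, 0.1            # learning rate and exploration probability

        class Node:
            def __init__(self):
                self.q = [0.0] * NUM_SLOTS   # one Q-value per time slot

            def best_slot(self):
                return max(range(NUM_SLOTS), key=lambda s: self.q[s])

            def pick_slot(self):
                if random.random() < EPSILON:
                    return random.randrange(NUM_SLOTS)   # explore
                return self.best_slot()                  # exploit

            def update(self, slot, reward):
                self.q[slot] += ALPHA * (reward - self.q[slot])

        nodes = [Node() for _ in range(5)]
        for frame in range(500):
            choices = [n.pick_slot() for n in nodes]
            for n, slot in zip(nodes, choices):
                collided = choices.count(slot) > 1
                n.update(slot, -1.0 if collided else 1.0)   # punish collisions, reward success

        print([n.best_slot() for n in nodes])   # slot choices tend to become collision-free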

    Multi objective resource scheduling in LTE networks using reinforcement learning

    No full text
    Intelligent packet scheduling is absolutely necessary in order to make radio resource usage more efficient in recent high-bit-rate-demanding radio access technologies such as Long Term Evolution (LTE). The packet scheduling procedure works with various dispatching rules with different behaviors. In the literature, a scheduling discipline is applied for the entire transmission session, and the scheduler performance strongly depends on the exploited discipline. The method proposed in this paper discusses how a straightforward schedule can be provided within the transmission time interval (TTI) sub-frame by using a mixture of dispatching disciplines per TTI instead of a single rule adopted across the whole transmission. The goal is to maximize the system throughput while assuring the best user fairness. This requires adopting a policy of how to mix the rules and a refinement procedure to call the best rule each time. Two scheduling policies are proposed for mixing the rules, including the use of a Q-learning algorithm for refining the policies. Simulation results indicate that the proposed methods outperform the existing scheduling techniques by maximizing the system throughput without harming the user fairness performance. © 2012, IGI Global
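    A compact, hypothetical illustration of the refinement step: tabular Q-learning that picks one dispatching rule per TTI from a small pool, with the scheduler state discretised into throughput/fairness bins; the rule names, bins and reward shaping are placeholders rather than the paper's exact formulation.

        import random
        from collections import defaultdict

        RULES = ["max_throughput", "proportional_fair", "round_robin"]
        ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

        Q = defaultdict(lambda: [0.0] * len(RULES))

        def discretise(throughput, fairness, bins=5):
            """Coarse state: (throughput level, fairness level), both normalised to [0, 1]."""
            return (min(int(throughput * bins), bins - 1),
                    min(int(fairness * bins), bins - 1))

        def choose_rule(state):
            if random.random() < EPSILON:
                return random.randrange(len(RULES))
            return max(range(len(RULES)), key=lambda a: Q[state][a])

        def q_update(state, action, reward, next_state):
            target = reward + GAMMA * max(Q[next_state])
            Q[state][action] += ALPHA * (target - Q[state][action])

        # One (simulated) TTI: the reward favours throughput while penalising lost fairness.
        s = discretise(throughput=0.60, fairness=0.80)
        a = choose_rule(s)
        s_next = discretise(throughput=0.65, fairness=0.78)
        reward = 0.65 - max(0.0, 0.80 - 0.78)   # illustrative reward shaping
        q_update(s, a, reward, s_next)
        print(RULES[a], Q[s])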

    A novel dynamic Q-learning-based scheduler technique for LTE-advanced technologies using neural networks

    No full text
    The tradeoff between system capacity and user fairness attracts considerable interest in LTE-Advanced resource allocation strategies. Using static threshold values for throughput or fairness, regardless of the network conditions, makes the scheduler inflexible when different tradeoff levels are required by the system. This paper proposes a novel dynamic neural Q-learning-based scheduling technique that achieves a flexible throughput-fairness tradeoff by offering optimal solutions according to the Channel Quality Indicator (CQI) for different classes of users. The Q-learning algorithm is used to adopt different policies of scheduling rules at each Transmission Time Interval (TTI). The novel scheduling technique makes use of neural networks in order to estimate proper scheduling rules for states which have not yet been explored. Simulation results indicate that the proposed method outperforms the existing scheduling techniques by maximizing the system throughput when different levels of fairness are required. Moreover, the system achieves the desired throughput-fairness tradeoff and an overall satisfaction for different classes of users.
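    The generalisation role of the neural network can be sketched as a one-hidden-layer Q-value approximator over a CQI-based state vector, so states that have not been visited still inherit estimates from similar ones; the layer sizes, features and the single gradient step below are assumptions, not the paper's architecture.

        import numpy as np

        RULES = ["max_throughput", "proportional_fair", "exp_pf"]   # illustrative rule pool
        rng = np.random.default_rng(1)

        # One-hidden-layer Q-network: 8 CQI/state features -> 32 hidden -> |RULES| Q-values.
        W1, b1 = rng.normal(0.0, 0.1, (32, 8)), np.zeros(32)
        W2, b2 = rng.normal(0.0, 0.1, (len(RULES), 32)), np.zeros(len(RULES))

        def q_values(state):
            h = np.maximum(0.0, W1 @ state + b1)          # ReLU hidden layer
            return W2 @ h + b2, h

        def sgd_step(state, action, td_target, lr=1e-2):
            """One mean-squared-error step towards the TD target for the taken action."""
            global W1, b1, W2, b2
            q, h = q_values(state)
            err = q[action] - td_target                   # gradient of 0.5 * err**2
            grad_q = np.zeros(len(RULES))
            grad_q[action] = err
            dh = (W2.T @ grad_q) * (h > 0)                # backprop through the ReLU
            W2 -= lr * np.outer(grad_q, h)
            b2 -= lr * grad_q
            W1 -= lr * np.outer(dh, state)
            b1 -= lr * dh

        state = rng.uniform(0.0, 1.0, 8)                  # normalised CQI statistics
        q, _ = q_values(state)
        print(RULES[int(np.argmax(q))], q.round(3))
        sgd_step(state, action=int(np.argmax(q)), td_target=1.0)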

    Use of high-performance polymeric materials in customized low-cost robotic grippers for biomechatronic applications: experimental and analytical research

    No full text
    Advancements in materials science and 3D printing technologies have opened up new avenues for developing low-cost robotic grippers with high-performance capabilities, making them suitable for various biomechatronic applications. This research explores the utilization of high-performance polymer materials, such as Polyetherketoneketone (PEKK), Polyethylene Terephthalate Glycol (PET-G) and MED 857 (DraftWhite), in the design and development of customized robotic grippers. The primary focus of the analyses was materials characterization, both experimental and analytical. Computer-Aided Engineering (CAE) methods were employed to simulate bending experiments, allowing for a comprehensive analysis of the mechanical behavior of the selected materials. These simulations were validated through physical bending experiments using samples fabricated via 3D printing technologies, including Fused Filament Fabrication (FFF) for PET-G and PEKK, as well as Jetted Photopolymer (PolyJet) technology employing UV resin for MED 857. The findings of this research demonstrate the advantages of utilizing advanced materials such as PEKK in low-cost robotic grippers for biomechatronic applications. The experimental and analytical approaches offer valuable insights into material selection, design optimization, and the development of cost-effective, high-performing robotic systems with a wide range of applications in the field of biomechatronics.
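    For the analytical side of such bending experiments, the standard three-point-bending beam formulas give a quick cross-check of simulated results; the specimen geometry and load values below are illustrative, not the paper's measured data for PEKK, PET-G or MED 857.

        def flexural_stress(force_N, span_mm, width_mm, thickness_mm):
            """Outer-fibre stress at mid-span: sigma = 3FL / (2bd^2), in MPa."""
            return 3.0 * force_N * span_mm / (2.0 * width_mm * thickness_mm ** 2)

        def flexural_modulus(slope_N_per_mm, span_mm, width_mm, thickness_mm):
            """Modulus from the initial load-deflection slope: E = L^3 m / (4bd^3), in MPa."""
            return span_mm ** 3 * slope_N_per_mm / (4.0 * width_mm * thickness_mm ** 3)

        # Example specimen: 64 mm span, 10 mm width, 4 mm thickness (common flexural geometry).
        print(flexural_stress(force_N=100.0, span_mm=64.0, width_mm=10.0, thickness_mm=4.0))      # ~60 MPa
        print(flexural_modulus(slope_N_per_mm=40.0, span_mm=64.0, width_mm=10.0, thickness_mm=4.0))  # ~4.1 GPa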